Results 1 - 20 of 104
1.
Article in English | MEDLINE | ID: mdl-38613820

ABSTRACT

OBJECTIVES: Phenotyping is a core task in observational health research utilizing electronic health records (EHRs). Developing an accurate algorithm demands substantial input from domain experts, involving extensive literature review and evidence synthesis. This burdensome process limits scalability and delays knowledge discovery. We investigate the potential for leveraging large language models (LLMs) to enhance the efficiency of EHR phenotyping by generating high-quality algorithm drafts. MATERIALS AND METHODS: We prompted four LLMs (GPT-4 and GPT-3.5 of ChatGPT, Claude 2, and Bard) in October 2023, asking them to generate executable phenotyping algorithms in the form of SQL queries adhering to a common data model (CDM) for three phenotypes (i.e., type 2 diabetes mellitus, dementia, and hypothyroidism). Three phenotyping experts evaluated the returned algorithms across several critical metrics. We further implemented the top-rated algorithms and compared them against clinician-validated phenotyping algorithms from the Electronic Medical Records and Genomics (eMERGE) network. RESULTS: GPT-4 and GPT-3.5 exhibited significantly higher overall expert evaluation scores in instruction following, algorithmic logic, and SQL executability when compared to Claude 2 and Bard. Although GPT-4 and GPT-3.5 effectively identified relevant clinical concepts, they exhibited immature capability in organizing phenotyping criteria with the proper logic, leading to phenotyping algorithms that were either excessively restrictive (with low recall) or overly broad (with low positive predictive values). CONCLUSION: GPT versions 3.5 and 4 are capable of drafting phenotyping algorithms by identifying relevant clinical criteria aligned with a CDM. However, expertise in informatics and clinical experience are still required to assess and further refine generated algorithms.
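The kind of artifact the study elicited can be sketched as a prompt plus a drafted OMOP-CDM-style SQL query. This is a hypothetical illustration: the concept IDs, table names, and prompt wording are assumptions for the sketch, not the study's actual prompts or the eMERGE-validated logic.

```python
def build_phenotype_prompt(phenotype: str, cdm: str = "OMOP CDM v5.4") -> str:
    """Compose a prompt asking an LLM for an executable phenotyping query."""
    return (
        f"Write a SQL query against the {cdm} that identifies patients with "
        f"{phenotype}. Return person_id only, and combine diagnosis codes, "
        f"medications, and lab results with explicit AND/OR logic."
    )

# What a plausible (simplified) draft answer might look like for type 2
# diabetes mellitus: a diagnosis code joined with an antidiabetic drug
# exposure. Concept IDs below are illustrative, not validated.
DRAFT_SQL = """
SELECT DISTINCT co.person_id
FROM condition_occurrence co
JOIN drug_exposure de ON de.person_id = co.person_id
WHERE co.condition_concept_id = 201826   -- T2DM (illustrative concept id)
  AND de.drug_concept_id IN (1503297)    -- metformin (illustrative)
""".strip()

prompt = build_phenotype_prompt("type 2 diabetes mellitus")
```

A draft like this is where expert review enters: the abstract notes that LLMs tend to get the concepts right but the AND/OR structure wrong, which is exactly what a phenotyping expert would audit here.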

2.
J Biomed Inform ; 153: 104640, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38608915

ABSTRACT

Evidence-based medicine promises to improve the quality of healthcare by empowering medical decisions and practices with the best available evidence. The rapid growth of medical evidence, which can be obtained from various sources, poses a challenge in collecting, appraising, and synthesizing the evidential information. Recent advancements in generative AI, exemplified by large language models, hold promise in facilitating the arduous task. However, developing accountable, fair, and inclusive models remains a complicated undertaking. In this perspective, we discuss the trustworthiness of generative AI in the context of automated summarization of medical evidence.

3.
NPJ Digit Med ; 7(1): 46, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38409350

ABSTRACT

Drug repurposing represents an attractive alternative to the costly and time-consuming process of new drug development, particularly for serious, widespread conditions with limited effective treatments, such as Alzheimer's disease (AD). Emerging generative artificial intelligence (GAI) technologies like ChatGPT offer the promise of expediting the review and summary of scientific knowledge. To examine the feasibility of using GAI for identifying drug repurposing candidates, we iteratively tasked ChatGPT with proposing the twenty most promising drugs for repurposing in AD, and tested the top ten for risk of incident AD in exposed and unexposed individuals over age 65 in two large clinical datasets: (1) Vanderbilt University Medical Center and (2) the All of Us Research Program. Among the candidates suggested by ChatGPT, metformin, simvastatin, and losartan were associated with lower AD risk in meta-analysis. These findings suggest GAI technologies can assimilate scientific insights from an extensive Internet-based search space, helping to prioritize drug repurposing candidates and facilitate the treatment of diseases.
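The two-cohort comparison described above is typically pooled with an inverse-variance fixed-effect meta-analysis; a minimal sketch follows, with illustrative (not the paper's) log hazard ratios and standard errors for one candidate drug.

```python
import math

def fixed_effect_meta(estimates):
    """Inverse-variance fixed-effect pooling of (log_hr, se) pairs."""
    weights = [1.0 / se ** 2 for _, se in estimates]
    pooled = sum(w * lhr for (lhr, _), w in zip(estimates, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Illustrative per-site estimates, e.g. one from VUMC and one from All of Us.
site_estimates = [(math.log(0.80), 0.10), (math.log(0.85), 0.12)]
log_hr, se = fixed_effect_meta(site_estimates)
pooled_hr = math.exp(log_hr)
ci = (math.exp(log_hr - 1.96 * se), math.exp(log_hr + 1.96 * se))
```

A pooled hazard ratio below 1 with a confidence interval excluding 1 is the pattern the abstract reports for metformin, simvastatin, and losartan.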

4.
medRxiv ; 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38196578

ABSTRACT

Objectives: Phenotyping is a core task in observational health research utilizing electronic health records (EHRs). Developing an accurate algorithm demands substantial input from domain experts, involving extensive literature review and evidence synthesis. This burdensome process limits scalability and delays knowledge discovery. We investigate the potential for leveraging large language models (LLMs) to enhance the efficiency of EHR phenotyping by generating high-quality algorithm drafts. Materials and Methods: We prompted four LLMs (GPT-4 and GPT-3.5 of ChatGPT, Claude 2, and Bard) in October 2023, asking them to generate executable phenotyping algorithms in the form of SQL queries adhering to a common data model (CDM) for three phenotypes (i.e., type 2 diabetes mellitus, dementia, and hypothyroidism). Three phenotyping experts evaluated the returned algorithms across several critical metrics. We further implemented the top-rated algorithms and compared them against clinician-validated phenotyping algorithms from the Electronic Medical Records and Genomics (eMERGE) network. Results: GPT-4 and GPT-3.5 exhibited significantly higher overall expert evaluation scores in instruction following, algorithmic logic, and SQL executability when compared to Claude 2 and Bard. Although GPT-4 and GPT-3.5 effectively identified relevant clinical concepts, they exhibited immature capability in organizing phenotyping criteria with the proper logic, leading to phenotyping algorithms that were either excessively restrictive (with low recall) or overly broad (with low positive predictive values). Conclusion: GPT versions 3.5 and 4 are capable of drafting phenotyping algorithms by identifying relevant clinical criteria aligned with a CDM. However, expertise in informatics and clinical experience are still required to assess and further refine generated algorithms.

5.
JAMA Netw Open ; 6(10): e2336383, 2023 10 02.
Article in English | MEDLINE | ID: mdl-37812421

ABSTRACT

Importance: US health professionals devote a large amount of effort to engaging with patients' electronic health records (EHRs) to deliver care. It is unknown whether patients with different racial and ethnic backgrounds receive equal EHR engagement. Objective: To investigate whether there are differences in the level of health professionals' EHR engagement for hospitalized patients according to race or ethnicity during inpatient care. Design, Setting, and Participants: This cross-sectional study analyzed EHR access log data from 2 major medical institutions, Vanderbilt University Medical Center (VUMC) and Northwestern Medicine (NW Medicine), over a 3-year period from January 1, 2018, to December 31, 2020. The study included all adult patients (aged ≥18 years) who were discharged alive after hospitalization for at least 24 hours. The data were analyzed between August 15, 2022, and March 15, 2023. Exposures: The actions of health professionals in each patient's EHR were based on EHR access log data. Covariates included patients' demographic information, socioeconomic characteristics, and comorbidities. Main Outcomes and Measures: The primary outcome was the quantity of EHR engagement, as defined by the average number of EHR actions performed by health professionals within a patient's EHR per hour during the patient's hospital stay. Proportional odds logistic regression was applied based on outcome quartiles. Results: A total of 243 416 adult patients were included from VUMC (mean [SD] age, 51.7 [19.2] years; 54.9% female and 45.1% male; 14.8% Black, 4.9% Hispanic, 77.7% White, and 2.6% other races and ethnicities) and NW Medicine (mean [SD] age, 52.8 [20.6] years; 65.2% female and 34.8% male; 11.7% Black, 12.1% Hispanic, 69.2% White, and 7.0% other races and ethnicities). 
When combining Black, Hispanic, or other race and ethnicity patients into 1 group, these patients were significantly less likely to receive a higher amount of EHR engagement compared with White patients (adjusted odds ratios, 0.86 [95% CI, 0.83-0.88; P < .001] for VUMC and 0.90 [95% CI, 0.88-0.92; P < .001] for NW Medicine). However, a reduction in this difference was observed from 2018 to 2020. Conclusions and Relevance: In this cross-sectional study of inpatient EHR engagement, the findings highlight differences in how health professionals distribute their efforts to patients' EHRs, as well as a method to measure these differences. Further investigations are needed to determine whether and how EHR engagement differences are correlated with health care outcomes.
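The study's outcome construction, EHR actions per hour binned into quartiles for a proportional odds model, can be sketched as follows. The rates and cutpoints here are toy values; the proportional odds fit itself would use a statistics package on real access-log data.

```python
def engagement_rate(n_actions: int, stay_hours: float) -> float:
    """EHR actions by health professionals per hour of hospital stay
    (the study's primary outcome measure)."""
    return n_actions / stay_hours

def quartile(value: float, cutpoints) -> int:
    """Map a rate to quartile 1-4 given three ascending cutpoints."""
    return 1 + sum(value > c for c in cutpoints)

# Toy (n_actions, stay_hours) pairs; real cutpoints would come from the
# observed distribution across all hospitalizations.
rates = [engagement_rate(a, h) for a, h in [(240, 48), (600, 30), (90, 36), (500, 25)]]
cuts = sorted(rates)[:3]  # toy cutpoints for the sketch
labels = [quartile(r, cuts) for r in rates]
```

The ordinal labels are then regressed on race/ethnicity plus the demographic, socioeconomic, and comorbidity covariates the abstract lists.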


Subjects
Electronic Health Records, Ethnicity, Healthcare Disparities, Adult, Female, Humans, Male, Middle Aged, Black or African American, Cross-Sectional Studies, Electronic Health Records/statistics & numerical data, White People, Hospitalization/statistics & numerical data, Attitude of Health Personnel, Aged, Healthcare Disparities/ethnology, Healthcare Disparities/statistics & numerical data, Time Factors
6.
medRxiv ; 2023 Jul 08.
Article in English | MEDLINE | ID: mdl-37461512

ABSTRACT

Drug repurposing represents an attractive alternative to the costly and time-consuming process of new drug development, particularly for serious, widespread conditions with limited effective treatments, such as Alzheimer's disease (AD). Emerging generative artificial intelligence (GAI) technologies like ChatGPT offer the promise of expediting the review and summary of scientific knowledge. To examine the feasibility of using GAI for identifying drug repurposing candidates, we iteratively tasked ChatGPT with proposing the twenty most promising drugs for repurposing in AD, and tested the top ten for risk of incident AD in exposed and unexposed individuals over age 65 in two large clinical datasets: 1) Vanderbilt University Medical Center and 2) the All of Us Research Program. Among the candidates suggested by ChatGPT, metformin, simvastatin, and losartan were associated with lower AD risk in meta-analysis. These findings suggest GAI technologies can assimilate scientific insights from an extensive Internet-based search space, helping to prioritize drug repurposing candidates and facilitate the treatment of diseases.

7.
IEEE Trans Nanobioscience ; 22(4): 808-817, 2023 10.
Article in English | MEDLINE | ID: mdl-37289605

ABSTRACT

Sharing individual-level pandemic data is essential for accelerating the understanding of a disease. For example, COVID-19 data have been widely collected to support public health surveillance and research. In the United States, these data are typically de-identified before publication to protect the privacy of the corresponding individuals. However, current data publishing approaches for this type of data, such as those adopted by the U.S. Centers for Disease Control and Prevention (CDC), have not adapted over time to account for the dynamic nature of infection rates. Thus, the policies generated by these strategies have the potential either to raise privacy risks or to overprotect the data and impair the data utility (or usability). To optimize the tradeoff between privacy risk and data utility, we introduce a game theoretic model that adaptively generates policies for the publication of individual-level COVID-19 data according to infection dynamics. We model the data publishing process as a two-player Stackelberg game between a data publisher and a data recipient and then search for the best strategy for the publisher. In this game, we consider 1) average performance of predicting future case counts; and 2) mutual information between the original data and the released data. We use COVID-19 case data from Vanderbilt University Medical Center from March 2020 to December 2021 to demonstrate the effectiveness of the new model. The results indicate that the game theoretic model outperforms all state-of-the-art baseline approaches, including those adopted by CDC, while maintaining low privacy risk. We further perform extensive sensitivity analyses to show that our findings are robust to order-of-magnitude parameter fluctuations.
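The leader's side of a Stackelberg formulation like this reduces to searching candidate publication policies for the best utility-risk tradeoff, anticipating the follower's response. A toy sketch with invented policies and scores (the actual model optimizes case-count prediction performance and mutual information, not these made-up numbers):

```python
def publisher_best_policy(policies, utility, risk, penalty=2.0):
    """Leader's search: pick the policy maximizing utility minus the
    anticipated cost of the follower's (adversary's) best response,
    summarized here as a linear privacy penalty."""
    def payoff(p):
        return utility[p] - penalty * risk[p]
    return max(policies, key=payoff)

# Illustrative generalization policies for case-count release: finer
# spatial/temporal granularity has higher utility but higher risk.
policies = ["county-daily", "county-weekly", "state-daily"]
utility = {"county-daily": 1.00, "county-weekly": 0.70, "state-daily": 0.40}
risk    = {"county-daily": 0.40, "county-weekly": 0.15, "state-daily": 0.05}
best = publisher_best_policy(policies, utility, risk)
```

The adaptive aspect in the paper corresponds to re-running this search as infection dynamics shift the utility and risk scores over time.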


Subjects
COVID-19, Privacy, Humans, United States/epidemiology, Pandemics, COVID-19/epidemiology, Publishing
8.
Genome Res ; 33(7): 1113-1123, 2023 07.
Article in English | MEDLINE | ID: mdl-37217251

ABSTRACT

The collection and sharing of genomic data are becoming increasingly commonplace in research, clinical, and direct-to-consumer settings. The computational protocols typically adopted to protect individual privacy include sharing summary statistics, such as allele frequencies, or limiting query responses to the presence/absence of alleles of interest using web services called Beacons. However, even such limited releases are susceptible to likelihood ratio-based membership-inference attacks. Several approaches have been proposed to preserve privacy, which either suppress a subset of genomic variants or modify query responses for specific variants (e.g., adding noise, as in differential privacy). However, many of these approaches result in a significant utility loss, either suppressing many variants or adding a substantial amount of noise. In this paper, we introduce optimization-based approaches to explicitly trade off the utility of summary data or Beacon responses and privacy with respect to membership-inference attacks based on likelihood ratios, combining variant suppression and modification. We consider two attack models. In the first, an attacker applies a likelihood ratio test to make membership-inference claims. In the second model, an attacker uses a threshold that accounts for the effect of the data release on the separation in scores between individuals in the data set and those who are not. We further introduce highly scalable approaches for approximately solving the privacy-utility tradeoff problem when information is in the form of either summary statistics or presence/absence queries. Finally, we show that the proposed approaches outperform the state of the art in both utility and privacy through an extensive evaluation with public data sets.
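A simplified version of the likelihood-ratio membership test that such Beacon defenses target can be sketched as follows, assuming an error-free response model and queries restricted to alleles the target is known to carry. This is an illustrative reduction, not the paper's full attack models.

```python
import math

def beacon_log_lr(yes_freqs, n_individuals):
    """Simplified membership log-likelihood ratio for 'yes' Beacon responses.

    H0: the target is NOT in the Beacon of N individuals; a 'yes' for an
        allele with population frequency f then requires some other member
        to carry it, with probability 1 - (1 - f)^(2N).
    H1: the target IS in the Beacon and carries each queried allele, so
        every response is 'yes' with probability 1 (error-free model).
    More negative values are stronger evidence of membership.
    """
    llr = 0.0
    for f in yes_freqs:
        p0 = 1.0 - (1.0 - f) ** (2 * n_individuals)
        llr += math.log(p0)  # log(P(yes|H0) / P(yes|H1)), with P(yes|H1) = 1
    return llr

# Rare alleles are far more informative to the attacker than common ones,
# which is why suppression/noising defenses prioritize them.
rare = beacon_log_lr([0.001] * 10, n_individuals=1000)
common = beacon_log_lr([0.30] * 10, n_individuals=1000)
```

The optimization the abstract describes chooses which variants to suppress or perturb so that this statistic stays uninformative while as many truthful responses as possible are preserved.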


Subjects
Information Dissemination, Privacy, Humans, Information Dissemination/methods, Genomics, Gene Frequency, Alleles
9.
Sci Rep ; 13(1): 6932, 2023 04 28.
Article in English | MEDLINE | ID: mdl-37117219

ABSTRACT

As recreational genomics continues to grow in its popularity, many people are afforded the opportunity to share their genomes in exchange for various services, including third-party interpretation (TPI) tools, to understand their predisposition to health problems and, based on genome similarity, to find extended family members. At the same time, these services have increasingly been reused by law enforcement to track down potential criminals through family members who disclose their genomic information. While it has been observed that many potential users shy away from such data sharing when they learn that their privacy cannot be assured, it remains unclear how potential users' valuations of the service will affect a population's behavior. In this paper, we present a game theoretic framework to model interdependent privacy challenges in genomic data sharing online. Through simulations, we find that in addition to the boundary cases when (1) no player and (2) every player joins, there exist pure-strategy Nash equilibria when a relatively small portion of players choose to join the genomic database. The result is consistent under different parametric settings. We further examine the stability of Nash equilibria and illustrate that the only equilibrium that is resistant to a random dropping of players is when all players join the genomic database. Finally, we show that when players consider the impact that their data sharing may have on their relatives, the only pure strategy Nash equilibria are when either no player or every player shares their genomic data.
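The equilibrium analysis can be illustrated with best-response dynamics on a toy interdependent-privacy payoff. The payoff function and parameters below are invented for illustration, and this toy version reproduces only the two boundary equilibria the abstract mentions (everyone joins, or no one does), not the interior equilibria of the full model.

```python
def share_payoff(i, sharing, benefit=1.0, base_privacy_cost=0.6, crowd_relief=0.05):
    """Toy interdependent-privacy payoff: sharing yields a service benefit
    minus a privacy cost that shrinks as more peers share (relatives' data
    already expose you), capturing the interdependence in the abstract."""
    others = sum(sharing) - sharing[i]
    if sharing[i]:
        return benefit - max(0.0, base_privacy_cost - crowd_relief * others)
    return 0.0

def best_response_dynamics(n=10, rounds=20, cost=0.6):
    """Repeatedly let each player flip their choice if it strictly helps."""
    sharing = [0] * n
    for _ in range(rounds):
        for i in range(n):
            flipped = sharing.copy()
            flipped[i] = 1 - flipped[i]
            if share_payoff(i, flipped, base_privacy_cost=cost) > \
               share_payoff(i, sharing, base_privacy_cost=cost):
                sharing = flipped
    return sharing

all_join = best_response_dynamics(cost=0.6)   # low privacy cost
none_join = best_response_dynamics(cost=1.5)  # high privacy cost
```

Under a low privacy cost the dynamics converge to everyone sharing; under a high cost no one ever finds joining worthwhile, matching the boundary cases in the abstract.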


Subjects
Non-alcoholic Fatty Liver Disease, Privacy, Humans, Information Dissemination, Family, Genomics
10.
J Am Med Inform Assoc ; 30(5): 907-914, 2023 04 19.
Article in English | MEDLINE | ID: mdl-36809550

ABSTRACT

OBJECTIVE: The All of Us Research Program makes individual-level data available to researchers while protecting the participants' privacy. This article describes the protections embedded in the multistep access process, with a particular focus on how the data were transformed to meet generally accepted re-identification risk levels. METHODS: At the time of the study, the resource consisted of 329 084 participants. Systematic amendments were applied to the data to mitigate re-identification risk (eg, generalization of geographic regions, suppression of public events, and randomization of dates). We computed the re-identification risk for each participant using a state-of-the-art adversarial model, specifically assuming that it is known that someone is a participant in the program. We confirmed the expected risk is no greater than 0.09, a threshold that is consistent with guidelines from various US state and federal agencies. We further investigated how risk varied as a function of participant demographics. RESULTS: The results indicated that the 95th percentile of the re-identification risk of all the participants was below current thresholds. At the same time, we observed that risk levels were higher for certain racial, ethnic, and gender groups. CONCLUSIONS: While the re-identification risk was sufficiently low, this does not imply that the system is devoid of risk. Rather, All of Us uses a multipronged data protection strategy that includes strong authentication practices, active monitoring of data misuse, and penalization mechanisms for users who violate terms of service.
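The per-participant risk computation is in the spirit of prosecutor risk over quasi-identifier equivalence classes, under the stated assumption that the adversary knows the target is a participant. A minimal sketch with invented, generalized quasi-identifiers (not All of Us data or its actual adversarial model):

```python
from collections import Counter

def per_record_risk(quasi_ids):
    """Prosecutor-style re-identification risk: 1 / size of each record's
    quasi-identifier equivalence class, assuming the adversary already
    knows the target is in the dataset."""
    sizes = Counter(quasi_ids)
    return [1.0 / sizes[q] for q in quasi_ids]

# Illustrative generalized quasi-identifiers: (age band, 3-digit ZIP, sex),
# reflecting the generalization/suppression amendments the abstract describes.
records = [("40-49", "372", "F"), ("40-49", "372", "F"),
           ("40-49", "372", "F"), ("70-79", "372", "M")]
risks = per_record_risk(records)
p95 = sorted(risks)[min(len(risks) - 1, int(0.95 * len(risks)))]  # crude percentile
```

The unique record carries risk 1.0, which is why percentile summaries (and further generalization of outliers) matter when certifying a threshold like 0.09.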


Subjects
Population Health, Humans, Male, Female, Privacy, Risk Management, Computer Security, Research Personnel
11.
AMIA Annu Symp Proc ; 2023: 1047-1056, 2023.
Article in English | MEDLINE | ID: mdl-38222326

ABSTRACT

Deep learning continues to rapidly evolve and is now demonstrating remarkable potential for numerous medical prediction tasks. However, realizing deep learning models that generalize across healthcare organizations is challenging. This is due, in part, to the inherent siloed nature of these organizations and patient privacy requirements. To address this problem, we illustrate how split learning can enable collaborative training of deep learning models across disparate and privately maintained health datasets, while keeping the original records and model parameters private. We introduce a new privacy-preserving distributed learning framework that offers a higher level of privacy compared to conventional federated learning. We use several biomedical imaging and electronic health record (EHR) datasets to show that deep learning models trained via split learning can achieve highly similar performance to their centralized and federated counterparts while greatly improving computational efficiency and reducing privacy risks.
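The core split-learning arrangement, the first model segment on the data holder and the remainder on a server that sees only cut-layer ("smashed") activations, can be sketched as a forward pass. This is a dependency-free toy with invented dimensions, not the paper's framework; training would additionally exchange gradients at the cut.

```python
import random

random.seed(0)

def linear(weights, x):
    """Matrix-vector product over plain lists."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def relu(v):
    return [max(0.0, a) for a in v]

class Client:
    """Holds the raw record and the first model segment; only cut-layer
    activations cross to the server, never the record itself."""
    def __init__(self, in_dim, cut_dim):
        self.w = [[random.gauss(0, 0.5) for _ in range(in_dim)]
                  for _ in range(cut_dim)]
    def forward(self, record):
        return relu(linear(self.w, record))

class Server:
    """Holds the remaining segment; sees only activations, not records."""
    def __init__(self, cut_dim):
        self.w = [[random.gauss(0, 0.5) for _ in range(cut_dim)]]
    def forward(self, smashed):
        return linear(self.w, smashed)[0]

client, server = Client(in_dim=4, cut_dim=3), Server(cut_dim=3)
record = [0.2, 1.0, 0.0, 3.5]   # e.g. EHR features; stays on the client
score = server.forward(client.forward(record))
```

The privacy argument relative to federated learning is visible in the structure: the server never holds the client's raw data or the client-side parameters, only the smashed activations.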


Subjects
Deep Learning, Medical Informatics, Humans, Electronic Health Records, Privacy
12.
Nat Commun ; 13(1): 7609, 2022 Dec 09.
Article in English | MEDLINE | ID: mdl-36494374

ABSTRACT

Synthetic health data have the potential to mitigate privacy concerns in supporting biomedical research and healthcare applications. Modern approaches for data generation continue to evolve and demonstrate remarkable potential. Yet there is a lack of a systematic assessment framework to benchmark methods as they emerge and determine which methods are most appropriate for which use cases. In this work, we introduce a systematic benchmarking framework to appraise key characteristics with respect to utility and privacy metrics. We apply the framework to evaluate synthetic data generation methods for electronic health records data from two large academic medical centers with respect to several use cases. The results illustrate that there is a utility-privacy tradeoff for sharing synthetic health data and further indicate that no method is unequivocally the best on all criteria in each use case, which makes it evident why synthetic data generation methods need to be assessed in context.
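One of the simplest utility probes such a benchmarking framework might include is marginal (dimension-wise) fidelity: how closely the synthetic data reproduce each feature's prevalence. A sketch on toy binary code vectors, standing in for one axis of the framework's actual metric suite:

```python
def marginal_fidelity(real, synthetic):
    """Score in [.., 1.0]: 1.0 means every binary feature's prevalence
    matches exactly between real and synthetic rows."""
    def prevalence(rows, j):
        return sum(r[j] for r in rows) / len(rows)
    n_cols = len(real[0])
    gaps = [abs(prevalence(real, j) - prevalence(synthetic, j))
            for j in range(n_cols)]
    return 1.0 - sum(gaps) / n_cols

# Toy binary EHR code indicators (rows = patients, cols = codes).
real = [(1, 0, 1), (1, 1, 0), (0, 0, 1), (1, 0, 0)]
good = [(1, 0, 1), (1, 0, 0), (0, 1, 1), (1, 0, 0)]   # matching marginals
bad  = [(0, 1, 0)] * 4                                # collapsed generator
```

The framework's point is that no single probe suffices: a generator can score perfectly here while failing joint-distribution or privacy metrics, which is why methods must be assessed per use case.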


Subjects
Biomedical Research, Electronic Health Records, Privacy, Benchmarking
13.
mSphere ; 7(5): e0025722, 2022 Oct 26.
Article in English | MEDLINE | ID: mdl-36173112

ABSTRACT

Accurate, highly specific immunoassays for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) are needed to evaluate seroprevalence. This study investigated the concordance of results across four immunoassays targeting different antigens for sera collected at the beginning of the SARS-CoV-2 pandemic in the United States. Specimens from All of Us participants contributed between January and March 2020 were tested using the Abbott Architect SARS-CoV-2 IgG (immunoglobulin G) assay (Abbott) and the EuroImmun SARS-CoV-2 enzyme-linked immunosorbent assay (ELISA) (EI). Participants with discordant results, participants with concordant positive results, and a subset of concordant negative results by Abbott and EI were also tested using the Roche Elecsys anti-SARS-CoV-2 (IgG) test (Roche) and the Ortho-Clinical Diagnostics Vitros anti-SARS-CoV-2 IgG test (Ortho). The agreement and 95% confidence intervals were estimated for paired assay combinations. SARS-CoV-2 antibody concentrations were quantified for specimens with at least two positive results across four immunoassays. Among the 24,079 participants, the percent agreement for the Abbott and EI assays was 98.8% (95% confidence interval, 98.7%, 99%). Of the 490 participants who were also tested by Ortho and Roche, the probability-weighted percentage of agreement (95% confidence interval) between Ortho and Roche was 98.4% (97.9%, 98.9%), that between EI and Ortho was 98.5% (92.9%, 99.9%), that between Abbott and Roche was 98.9% (90.3%, 100.0%), that between EI and Roche was 98.9% (98.6%, 100.0%), and that between Abbott and Ortho was 98.4% (91.2%, 100.0%). Among the 32 participants who were positive by at least 2 immunoassays, 21 had quantifiable anti-SARS-CoV-2 antibody concentrations by research assays. The results across immunoassays revealed concordance during a period of low prevalence. 
However, the frequency of false positivity during a period of low prevalence supports the use of two sequentially performed tests for unvaccinated individuals who are seropositive by the first test. IMPORTANCE What is the agreement of commercial SARS-CoV-2 immunoglobulin G (IgG) assays during a time of low coronavirus disease 2019 (COVID-19) prevalence and no vaccine availability? Serological tests produced concordant results in a time of low SARS-CoV-2 prevalence and no vaccine availability, driven largely by the proportion of samples that were negative by two immunoassays. The CDC recommends two sequential tests for positivity for future pandemic preparedness. In a subset analysis, quantified antinucleocapsid and antispike SARS-CoV-2 IgG antibodies do not suggest the need to specify the antigen targets of the sequential assays in the CDC's recommendation because false positivity varied as much between assays targeting the same antigen as it did between assays targeting different antigens.
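The headline statistic, overall percent agreement between paired qualitative assays, is straightforward to compute. A sketch with invented calls (confidence intervals and the probability weighting omitted) that also illustrates how, at low prevalence, agreement is driven mostly by concordant negatives:

```python
def percent_agreement(results_a, results_b):
    """Overall percent agreement between two paired qualitative assays."""
    matches = sum(a == b for a, b in zip(results_a, results_b))
    return 100.0 * matches / len(results_a)

# Toy paired positive/negative calls for two assays over 100 specimens.
abbott = ["neg"] * 96 + ["pos", "pos", "pos", "neg"]
ei     = ["neg"] * 96 + ["pos", "pos", "neg", "pos"]
agreement = percent_agreement(abbott, ei)
```

Even with half of the positive calls discordant, overall agreement stays at 98%, which is why the study's sequential-testing recommendation focuses on confirming seropositives rather than on the headline agreement figure.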


Subjects
COVID-19, Population Health, Humans, SARS-CoV-2, COVID-19/diagnosis, COVID-19/epidemiology, Prevalence, Seroepidemiologic Studies, Sensitivity and Specificity, Antibodies, Viral, Immunoglobulin G
14.
J Am Med Inform Assoc ; 30(1): 155-160, 2022 12 13.
Article in English | MEDLINE | ID: mdl-36048014

ABSTRACT

The Supreme Court recently overturned settled case law that affirmed a pregnant individual's Constitutional right to an abortion. While many states will commit to protect this right, a large number of others have enacted laws that limit or outright ban abortion within their borders. Additional efforts are underway to prevent pregnant individuals from seeking care outside their home state. These changes have significant implications for delivery of healthcare as well as for patient-provider confidentiality. In particular, these laws will influence how information is documented in and accessed via electronic health records and how personal health applications are utilized in the consumer domain. We discuss how these changes may lead to confusion and conflict regarding use of health information, both within and across state lines, why current health information security practices may need to be reconsidered, and what policy options may be possible to protect individuals' health information.


Subjects
Abortion, Induced, Privacy, Pregnancy, Female, Humans, Confidentiality/legislation & jurisprudence, Forecasting, Delivery of Health Care
15.
J Am Med Inform Assoc ; 29(11): 1890-1898, 2022 10 07.
Article in English | MEDLINE | ID: mdl-35927974

ABSTRACT

OBJECTIVE: Synthetic data are increasingly relied upon to share electronic health record (EHR) data while maintaining patient privacy. Current simulation methods can generate longitudinal data, but the results are unreliable for several reasons. First, the synthetic data drifts from the real data distribution over time. Second, the typical approach to quality assessment, which is based on the extent to which real records can be distinguished from synthetic records using a critic model, often fails to recognize poor simulation results. In this article, we introduce a longitudinal simulation framework, called LS-EHR, which addresses these issues. MATERIALS AND METHODS: LS-EHR enhances simulation through conditional fuzzing and regularization, rejection sampling, and prior knowledge embedding. We compare LS-EHR to the state-of-the-art using data from 60 000 EHRs from Vanderbilt University Medical Center (VUMC) and the All of Us Research Program. We assess discrimination between real and synthetic data over time. We evaluate the generation process and critic model using the area under the receiver operating characteristic curve (AUROC). For the critic, a higher value indicates a more robust model for quality assessment. For the generation process, a lower value indicates better synthetic data quality. RESULTS: The LS-EHR critic improves discrimination AUROC from 0.655 to 0.909 and 0.692 to 0.918 for VUMC and All of Us data, respectively. By using the new critic, the LS-EHR generation model reduces the AUROC from 0.909 to 0.758 and 0.918 to 0.806. CONCLUSION: LS-EHR can substantially improve the usability of simulated longitudinal EHR data.
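Both the critic and the generator in this setup are scored by AUROC on the real-versus-synthetic discrimination task. The Mann-Whitney formulation of AUROC (probability that a randomly chosen real record outranks a synthetic one under the critic's score) can be sketched directly:

```python
def auroc(scores_pos, scores_neg):
    """AUROC as the probability a positive outranks a negative
    (Mann-Whitney formulation; ties count as half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Critic scores for real records (positive class) vs synthetic records:
# a strong critic separates them (AUROC near 1.0); good synthetic data
# push the best achievable critic back toward 0.5.
strong = auroc([0.9, 0.8, 0.7], [0.2, 0.3, 0.4])
chance = auroc([0.5, 0.6], [0.5, 0.6])
```

This is why the abstract reads the two directions oppositely: a higher AUROC is better for the critic (more robust quality assessment), while a lower AUROC against that critic is better for the generator.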


Subjects
Population Health, Computer Simulation, Electronic Health Records, Feedback, Humans
16.
AMIA Jt Summits Transl Sci Proc ; 2022: 359-368, 2022.
Article in English | MEDLINE | ID: mdl-35854721

ABSTRACT

Hormonal therapy (HT) reduces the risk of cancer recurrence and the mortality rate for patients with hormone-receptor-positive breast cancer. However, it is estimated that half of the patients fail to complete the standard 5-year adjuvant treatment protocol. We investigate the extent to which certain types of structured data in electronic medical records (EMRs), namely conditions, drugs, laboratory tests and procedures, as well as when such data are entered into EMRs, can forecast HT discontinuation. Our experiments with EMR data from 2,251 patients showed that machine learning models based on these data types achieve fair performance (AUC of 0.65). More importantly, the performance was not statistically significantly different when fitting a model using all or only one feature type, suggesting that the model is robust to missing information in the EMR.

17.
Stud Health Technol Inform ; 290: 503-507, 2022 Jun 06.
Article in English | MEDLINE | ID: mdl-35673066

ABSTRACT

Telehealth is an alternative care delivery model to in-person care. It uses electronic information and telecommunication technologies to provide remote clinical care to patients, especially those living in rural areas that lack sufficient access to health care services. Like other areas of care affected by the COVID-19 pandemic, the prevalence of telehealth has increased in prenatal care. This study reports on telehealth use in prenatal care at a large academic medical center in Middle Tennessee, USA. We examine the electronic health records of over 2500 women to characterize 1) the volume of prenatal visits conducted via telehealth, 2) disparities in obstetric patients using telehealth, and 3) the impact of telehealth use on obstetric outcomes, including duration of intrapartum hospital stays, preterm birth, Cesarean rate, and newborn birthweight. Our results show that telehealth was mainly used in the second and third trimesters, especially for consulting services. In addition, we found that certain demographics correlated with lower telehealth utilization, including patients who were under 26 years old, were Black and/or Hispanic, were on a state-sponsored health insurance program, and those who lived in urban areas. Furthermore, no significant differences were found in preterm birth or Cesarean rates between the patients who used telehealth in their prenatal care and those who did not.


Subjects
COVID-19, Premature Birth, Telemedicine, Adult, COVID-19/epidemiology, Female, Humans, Infant, Newborn, Pandemics, Pregnancy, Premature Birth/epidemiology, Premature Birth/therapy, Prenatal Care/methods, Retrospective Studies, SARS-CoV-2, Telemedicine/methods
18.
Stud Health Technol Inform ; 290: 1032-1033, 2022 Jun 06.
Article in English | MEDLINE | ID: mdl-35673191

ABSTRACT

Telehealth is designed to provide health services through the use of electronic information and telecommunication technologies. It has quickly become an important tool to ensure continued care in response to the COVID-19 pandemic while mitigating the risk of viral exposure for patients and providers. This study compared the number of monthly telehealth visits in primary care settings at a large academic medical center from 2019 and 2020. To investigate what health conditions are suitable for telehealth visits, we report on the ten ICD-10 codes with the largest number of telehealth visits.


Subjects
COVID-19, Telemedicine, Academic Medical Centers, COVID-19/epidemiology, Humans, Pandemics, Primary Health Care
19.
J Am Med Inform Assoc ; 29(9): 1584-1592, 2022 08 16.
Article in English | MEDLINE | ID: mdl-35641135

ABSTRACT

OBJECTIVE: Deep learning models for clinical event forecasting (CEF) based on a patient's medical history have improved significantly over the past decade. However, their transition into practice has been limited, particularly for diseases with very low prevalence. In this paper, we introduce CEF-CL, a novel method based on contrastive learning to forecast in the face of a limited number of positive training instances. MATERIALS AND METHODS: CEF-CL consists of two primary components: (1) unsupervised contrastive learning for patient representation and (2) supervised transfer learning over the derived representation. We evaluate the new method along with state-of-the-art model architectures trained in a supervised manner with electronic health records data from Vanderbilt University Medical Center and the All of Us Research Program, covering 48 000 and 16 000 patients, respectively. We assess forecasting for over 100 diagnosis codes with respect to their area under the receiver operator characteristic curve (AUROC) and area under the precision-recall curve (AUPRC). We investigate the correlation between forecasting performance improvement and code prevalence via a Wald Test. RESULTS: CEF-CL achieved an average AUROC and AUPRC performance improvement over the state-of-the-art of 8.0%-9.3% and 11.7%-32.0%, respectively. The improvement in AUROC was negatively correlated with the number of positive training instances (P < .001). CONCLUSION: This investigation indicates that clinical event forecasting can be improved significantly through contrastive representation learning, especially when the number of positive training instances is small.
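The unsupervised stage of contrastive representation learning like CEF-CL's typically minimizes an InfoNCE-style objective: pull an anchor embedding toward its positive (e.g. an augmented view of the same patient history) and push it from negatives. A minimal pure-Python sketch of that loss; the paper's actual architecture and loss details may differ:

```python
import math

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss over cosine similarities:
    low when anchor is close to its positive and far from negatives."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    logits = [cos(anchor, positive) / temperature] + \
             [cos(anchor, n) / temperature for n in negatives]
    m = max(logits)  # stabilized log-sum-exp
    log_denominator = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_denominator)

# A well-aligned anchor/positive pair should incur a lower loss than a
# misaligned one (toy 2-d embeddings for illustration).
anchor = [1.0, 0.0]
aligned = info_nce_loss(anchor, [0.9, 0.1], [[-1.0, 0.0], [0.0, 1.0]])
misaligned = info_nce_loss(anchor, [-1.0, 0.1], [[0.9, 0.1], [0.0, 1.0]])
```

Because this stage needs no outcome labels, it can exploit all patients' histories; the small number of positive cases is only consumed by the lighter supervised transfer stage, which is the mechanism behind the reported gains on rare codes.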


Subjects
Population Health, Electronic Health Records, Forecasting, Humans